This week reinforced the idea that language models do not really understand what they are replying to, or even what they are saying; they are essentially producing an answer based on probability.
I enjoyed the thought experiment of being stuck in a library with nothing but books in a 'foreign' language. It illustrated how LLMs are genuinely incapable of understanding what they are saying, because they do not grasp the meaning behind the language.
The Wolfram article made me reflect on my own experiences with AI, when a model would get caught in a loop, repeating the same phrase over and over: "and then you do this, and then you do this, and then you do this." It made me realize that from the model's perspective, this output is perfectly reasonable, because it is acting purely on probability: it has learned that these words tend to follow one another when arranged in this way.
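To make that concrete for myself, here is a toy sketch (nothing like a real LLM, and all the words and probabilities are made up) of picking the single most probable next word from a tiny bigram table. Because the most likely continuations form a cycle, greedy selection loops forever, which is roughly the flavor of the repetition I was seeing.

```python
# Toy illustration only: hand-made bigram "probabilities", not a trained model.
toy_bigram_probs = {
    "and":   {"then": 0.9, "so": 0.1},
    "then":  {"you": 0.95, "it": 0.05},
    "you":   {"do": 0.8, "go": 0.2},
    "do":    {"this,": 0.85, "that,": 0.15},
    "this,": {"and": 0.9, "done.": 0.1},
}

def greedy_continue(start, steps):
    """Always pick the single most probable next word after the current one."""
    words = [start]
    for _ in range(steps):
        options = toy_bigram_probs.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(greedy_continue("and", 15))
# -> "and then you do this, and then you do this, and then you do this, and"
```

Real models sample from much richer distributions over many candidate words, but the basic point stands: if the model only knows what tends to come next, a locally sensible choice repeated over and over can produce output that looks nonsensical to a reader.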